Add cuBLAS mm_out shim to eliminate libtorch runtime dependency#19360

Open
digantdesai wants to merge 1 commit into main from cublas-mm-shim

Conversation

@digantdesai
Contributor

@digantdesai digantdesai commented May 7, 2026

Implements aoti_torch_cuda_mm_out as a thin cuBLAS wrapper in the ExecuTorch AOTI CUDA shims. When Inductor picks cuBLAS over Triton templates for aten::mm (F.linear), the compiled .so requires this symbol at runtime.

In practice, Inductor's autotune on A100 picks Triton templates for the Qwen3.5 MoE dense projections (bf16 [M,2048]x[2048,N]), so the shim is not exercised for this model. It serves as a safety net for models or shapes where cuBLAS wins the autotune.
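For context, a cuBLAS-backed mm_out shim mostly reduces to the usual layout-swap trick: cuBLAS GEMM is column-major while AOTI tensors are row-major, so C = A·B is computed as Cᵀ = Bᵀ·Aᵀ by passing the operands in swapped order. A minimal CPU sketch of that mapping, with a naive column-major GEMM standing in for cublasGemmEx (names are illustrative, not the PR's actual code):

```cpp
// Column-major GEMM reference: C(m,n) = A(m,k) * B(k,n), all column-major.
// Stands in for cublasGemmEx here so the sketch runs without a GPU.
static void gemm_colmajor(int m, int n, int k,
                          const float* A, int lda,
                          const float* B, int ldb,
                          float* C, int ldc) {
  for (int j = 0; j < n; ++j)
    for (int i = 0; i < m; ++i) {
      float acc = 0.f;
      for (int p = 0; p < k; ++p)
        acc += A[i + p * lda] * B[p + j * ldb];
      C[i + j * ldc] = acc;
    }
}

// Row-major mm_out via a column-major GEMM: C = A*B (row-major) is
// C^T = B^T * A^T (column-major), so pass (B, A) with swapped dims and
// reinterpret the row-major buffers as column-major transposes.
void mm_out_rowmajor(int M, int N, int K,
                     const float* A, const float* B, float* C) {
  gemm_colmajor(N, M, K, B, /*lda=*/N, A, /*ldb=*/K, C, /*ldc=*/N);
}
```

The real shim applies the same operand swap when invoking cublasGemmEx, plus dtype dispatch (bf16/fp16/fp32) and handle/stream setup.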

@pytorch-bot

pytorch-bot Bot commented May 7, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19360

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEVs

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 1 Cancelled Job, 4 Unrelated Failures

As of commit b6b4ad7 with merge base 8ae05c2:

NEW FAILURE - The following job has failed:

CANCELLED JOB - The following job was cancelled. Please retry:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed label May 7, 2026
@digantdesai digantdesai added the ciflow/cuda label and removed the CLA Signed label May 7, 2026
@github-actions

github-actions Bot commented May 7, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-cla meta-cla Bot added the CLA Signed label May 7, 2026
Implements aoti_torch_cuda_mm_out as a thin cuBLAS wrapper in the
ExecuTorch AOTI CUDA shims. When Inductor picks cuBLAS over Triton
templates for aten::mm (F.linear), the compiled .so requires this
symbol at runtime. Without this shim, it resolves from
libtorch_cuda.so, pulling in the full libtorch runtime.

In practice, Inductor's autotune on A100 picks Triton templates for
the Qwen3.5 MoE dense projections (bf16 [M,2048]x[2048,N]), so the
shim is not exercised for this model. It serves as a safety net for
models or shapes where cuBLAS wins the autotune, ensuring fully
libtorch-free AOTI CUDA deployment in all cases.

Co-authored-by: Claude <noreply@anthropic.com>
@Gasoonjia Gasoonjia changed the title Add cuBLAS mm_out shim to eliminate libtorch runtime dependency Add cuBLAS mm_out shim to cuda backend May 7, 2026
@Gasoonjia Gasoonjia changed the title Add cuBLAS mm_out shim to cuda backend Add cuBLAS mm_out shim to eliminate libtorch runtime dependency May 7, 2026
@Gasoonjia
Contributor

Can you help me update the title and summary a bit? One thing: our CUDA backend has never depended on libtorch, but the current wording makes it sound like we are depending on it.

@digantdesai
Contributor Author

> Can you help me update the title and summary a bit? One thing: our CUDA backend has never depended on libtorch, but the current wording makes it sound like we are depending on it.

Done. Since we use F.linear in the kernel, which can fall back to cuBLAS on CUDA, we need this shim even if Qwen happens not to use it.

@Gasoonjia
Contributor

> Can you help me update the title and summary a bit? One thing: our CUDA backend has never depended on libtorch, but the current wording makes it sound like we are depending on it.

> Done. Since we use F.linear in the kernel, which can fall back to cuBLAS on CUDA, we need this shim even if Qwen happens not to use it.

Thanks. Mind updating the title as well?

Also:

  1. Can you benchmark a model with only one mm operator to see how much perf gain we get from cuBLAS mm, compared with the previous Triton kernel version, at the operator level?

  2. One thing I'm worried about is that using aten::mm_out will block AOTI compile from fusing the mm kernel with others. Can you compare perf on a real model, like Qwen or Gemma?

@digantdesai digantdesai marked this pull request as ready for review May 12, 2026 16:39
Copilot AI review requested due to automatic review settings May 12, 2026 16:39
Contributor

Copilot AI left a comment


Pull request overview

This PR adds an ExecuTorch AOTI CUDA shim implementation of aoti_torch_cuda_mm_out backed by cuBLAS to avoid requiring libtorch_cuda.so at runtime when Inductor emits calls to that symbol (e.g., for aten::mm / F.linear).

Changes:

  • Introduces a cuBLAS-based aoti_torch_cuda_mm_out shim (mm.h / mm.cu) and links cuBLAS into aoti_cuda_shims.
  • Adds a new GTest suite covering correctness (bf16/fp16/fp32) and contract validation for mm_out.
  • Registers the new test in both CMake and Buck/Bazel test target definitions.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
backends/cuda/runtime/shims/mm.h Declares the exported aoti_torch_cuda_mm_out shim API.
backends/cuda/runtime/shims/mm.cu Implements aoti_torch_cuda_mm_out via cublasGemmEx with per-device handle management.
backends/cuda/CMakeLists.txt Adds mm.cu to shim sources and links CUDA::cublas into aoti_cuda_shims.
backends/cuda/runtime/shims/tests/test_aoti_torch_cuda_mm_out.cpp Adds typed correctness tests and contract-validation tests for the new shim.
backends/cuda/runtime/shims/tests/CMakeLists.txt Builds and registers the new mm_out test binary.
backends/cuda/runtime/shims/tests/targets.bzl Registers the new mm_out test in shared Buck/Bazel target definitions.
Comments suppressed due to low confidence (1)

backends/cuda/runtime/shims/tests/targets.bzl:46

  • This adds a Buck test target for aoti_torch_cuda_mm_out, but //executorch/backends/cuda/runtime:runtime_shims (a dependency of cuda_shim_cpp_unittest) does not currently list shims/mm.cu in srcs or shims/mm.h in headers. As a result, the test will fail to compile/link under Buck. Please add the new shim source/header to the runtime_shims library (and any needed CUDA/cublas deps there) or adjust the test deps to a target that exports these files.
    cuda_shim_cpp_unittest("aoti_torch_cuda_rand")
    cuda_shim_cpp_unittest("aoti_torch_new_tensor_handle")
    cuda_shim_cpp_unittest("aoti_torch_item_bool")
    cuda_shim_cpp_unittest("aoti_torch_assign_tensors_out")
    cuda_shim_cpp_unittest("aoti_torch_cuda_mm_out")



Comment on lines +123 to +129
// Per-device handle; mutex in get() ensures thread-safe initialization.
// cublasSetStream + cublasGemmEx are serialized under the same mutex to
// prevent races when multiple threads share a device.
auto& handles = cublas_handles();
std::lock_guard<std::mutex> lock(handles.mutex);
cublasHandle_t handle = handles.get(device);
cublasSetStream(handle, stream_result.get());
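The per-device handle caching referenced in that snippet can be sketched with a mock handle type standing in for cublasHandle_t, so the pattern runs without a GPU (names here are illustrative, not the shim's exact code):

```cpp
#include <map>
#include <mutex>

// Mock handle standing in for cublasHandle_t in this CPU-only sketch.
struct MockHandle { int device = -1; };

struct CublasHandles {
  std::mutex mutex;                      // guards the map and handle use
  std::map<int, MockHandle> per_device;  // one handle per CUDA device

  // Create-on-first-use. Caller must hold `mutex`, as the shim does by
  // taking the lock before get() and keeping it through the GEMM call.
  MockHandle& get(int device) {
    auto it = per_device.find(device);
    if (it == per_device.end()) {
      MockHandle h;
      h.device = device;  // real code: cublasCreate(&h) for this device
      it = per_device.emplace(device, h).first;
    }
    return it->second;
  }
};
```

Serializing cublasSetStream and the GEMM under one lock trades some concurrency for correctness when multiple threads share a device, which matches the comment in the diff above.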
Comment on lines +31 to +32
* @param self Input matrix [M, K]. Must be bf16 or fp16, 2D, contiguous.
* @param mat2 Input matrix [K, N]. Must be bf16 or fp16, 2D, contiguous.
Comment on lines +71 to +85
# mm_out test — cuBLAS is already linked into aoti_cuda_shims
add_executable(test_aoti_torch_cuda_mm_out test_aoti_torch_cuda_mm_out.cpp)

target_include_directories(
  test_aoti_torch_cuda_mm_out PRIVATE ${EXECUTORCH_ROOT}/.. ${EXECUTORCH_ROOT}
                                      ${CUDAToolkit_INCLUDE_DIRS}
)

target_compile_definitions(test_aoti_torch_cuda_mm_out PRIVATE CUDA_AVAILABLE=1)

target_link_libraries(
  test_aoti_torch_cuda_mm_out
  PRIVATE GTest::gtest GTest::gtest_main aoti_cuda_shims executorch_core
          CUDA::cudart
)
Comment on lines +277 to +289
TEST_F(AOTITorchMmOutTest, NonContiguousRejected) {
  // Create a [8, 8] tensor and slice rows to get non-contiguous [4, 8]
  int64_t big_sizes[] = {8, 8};
  int64_t big_strides[] = {8, 1};
  Tensor* big = nullptr;
  aoti_torch_empty_strided(
      2,
      big_sizes,
      big_strides,
      static_cast<int32_t>(slim_c10::ScalarType::Float),
      static_cast<int32_t>(slim_c10::DeviceType::CUDA),
      0,
      &big);
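The contiguity contract this test exercises reduces to a stride check. A sketch of such a validation (a hypothetical helper, not the shim's exact code):

```cpp
#include <cstdint>

// Minimal 2-D contiguity check, strides in elements: a row-major
// [rows, cols] tensor is contiguous iff its strides are {cols, 1}.
bool is_contiguous_2d(const int64_t sizes[2], const int64_t strides[2]) {
  return strides[1] == 1 && strides[0] == sizes[1];
}
```

A shim that forwards straight to cublasGemmEx must reject anything failing this check, since the GEMM leading-dimension arguments assume dense row-major storage.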

Labels

ciflow/cuda, CLA Signed

3 participants